Optimization by Variational Bounding
Abstract
We discuss a general technique that forms a differentiable bound on non-differentiable objective functions by bounding the function optimum by its expectation with respect to a parametric variational distribution. We describe sufficient conditions for the bound to be convex with respect to the variational parameters. As example applications we consider variants of sparse linear regression and SVM training.

1 Variational Optimization

We consider the general problem of function minimization, $\min_x f(x)$ for a vector $x$. When $f$ is differentiable and $x$ is continuous, optimization methods that use gradient information are typically preferred over gradient-free approaches, since they can exploit a locally optimal search direction. However, when $f$ is not differentiable or $x$ is discrete, gradient-based approaches are not directly applicable. In that case, alternatives such as relaxation, coordinatewise optimization and stochastic approaches are popular. Our interest is to discuss another general class of methods that yields differentiable surrogate objectives for discrete $x$ or non-differentiable $f$.

The Variational Optimization (VO) approach is based on the bound
$$
f^* = \min_{x \in \mathcal{C}} f(x) \;\le\; \langle f(x) \rangle_{p(x|\theta)} \;\equiv\; E(\theta),
$$
where $\langle \cdot \rangle_p$ denotes the expectation with respect to the probability density function $p$ defined over the solution space $\mathcal{C}$. The parameters $\theta$ of the distribution $p(x|\theta)$ can then be adjusted to minimize the upper bound $E(\theta)$. The bound can be made trivially tight provided the distribution $p(x|\theta)$ is flexible enough to place all its mass on the optimal state $x^* = \arg\min_{x \in \mathcal{C}} f(x)$. The variational bound is equivalent to the objective smoothed by convolution with the variational distribution, with the degree of smoothing increasing with the dispersion of the variational distribution. The gradient of $E(\theta)$ is given by
$$
\nabla_\theta E(\theta) = \nabla_\theta \int f(x)\, p(x|\theta)\, dx = \big\langle f(x)\, \nabla_\theta \log p(x|\theta) \big\rangle_{p(x|\theta)},
$$
so that $E(\theta)$ is differentiable whenever $p(x|\theta)$ is differentiable in $\theta$ and the expectation exists, even when $f$ itself is not differentiable.
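To make the technique concrete, the following is a minimal numerical sketch (not from the paper): VO applied to the hypothetical non-differentiable objective f(x) = |x - 3| under a Gaussian variational distribution p(x|μ, σ), with the bound gradient estimated by Monte Carlo via the log-derivative identity above. The objective, learning rate, initialization and sample size are all illustrative choices rather than values from the source.

```python
import numpy as np

def f(x):
    """Non-differentiable target: |x - 3|, minimized at x* = 3."""
    return np.abs(x - 3.0)

def vo_gradient(mu, log_sigma, rng, n_samples=1_000):
    """Monte Carlo estimate of grad E(theta) for theta = (mu, log sigma),
    using grad_theta E = < f(x) grad_theta log p(x|theta) >_{p(x|theta)}.

    For p(x|theta) = N(mu, sigma^2):
        grad_mu        log p = (x - mu) / sigma^2
        grad_log_sigma log p = (x - mu)^2 / sigma^2 - 1
    """
    sigma = np.exp(log_sigma)
    z = rng.standard_normal(n_samples)
    x = mu + sigma * z                      # samples from p(x|theta)
    fx = f(x)
    grad_mu = np.mean(fx * z / sigma)       # (x - mu)/sigma^2 = z/sigma
    grad_log_sigma = np.mean(fx * (z**2 - 1.0))
    return grad_mu, grad_log_sigma

rng = np.random.default_rng(0)
mu, log_sigma = -5.0, np.log(2.0)           # initial variational parameters
lr = 0.05
for _ in range(2_000):                      # gradient descent on the bound E(theta)
    g_mu, g_ls = vo_gradient(mu, log_sigma, rng)
    mu -= lr * g_mu
    log_sigma -= lr * g_ls

print(f"mu = {mu:.3f} (true minimizer 3.0), sigma = {np.exp(log_sigma):.4f}")
```

As the loop converges, μ approaches the minimizer and σ shrinks, concentrating the variational mass near x* and tightening the bound. Because the gradient estimator only requires evaluations of f, the same scheme applies unchanged when x is discrete (e.g. a Bernoulli distribution over binary x), with the expectation taken as a sum.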
Similar papers
Variational 3D Shape Segmentation for Bounding Volume Computation
We propose a variational approach to computing an optimal segmentation of a 3D shape for computing a union of tight bounding volumes. Based on an affine invariant measure of e-tightness, the resemblance to an ellipsoid, a novel functional is formulated that governs an optimization process to obtain a partition with multiple components. Refinement of segmentation is driven by application-specific e...
Vector Optimization Problems and Generalized Vector Variational-Like Inequalities
In this paper, some properties of pseudoinvex functions, defined by means of the limiting subdifferential, are discussed. Furthermore, the Minty vector variational-like inequality, the Stampacchia vector variational-like inequality, and the weak formulations of these two inequalities defined by means of the limiting subdifferential are studied. Moreover, some relationships between the vector vari...
Optimization of Solution Regularized Long-wave Equation by Using Modified Variational Iteration Method
In this paper, a regularized long-wave equation (RLWE) is solved by using the Adomian's decomposition method (ADM), modified Adomian's decomposition method (MADM), variational iteration method (VIM), modified variational iteration method (MVIM) and homotopy analysis method (HAM). The approximate solution of this equation is calculated in the form of a series whose components are computed by ...
Variational Chernoff Bounds for Graphical Models
Recent research has made significant progress on the problem of bounding log partition functions for exponential family graphical models. Such bounds have associated dual parameters that are often used as heuristic estimates of the marginal probabilities required in inference and learning. However, these variational estimates do not give rigorous bounds on marginal probabilities, nor do they giv...
Sequential Optimality Conditions and Variational Inequalities
In recent years, sequential optimality conditions have frequently been used to establish convergence of iterative methods for nonlinear constrained optimization problems. Sequential optimality conditions do not require any constraint qualifications. In this paper, we present the necessary sequential complementary approximate Karush-Kuhn-Tucker (CAKKT) condition for a point to be a solution of a ...